
    The Impact of Flow in an EEG-based Brain Computer Interface

    Major issues in Brain-Computer Interfaces (BCIs) include low usability and poor user performance. This paper tackles both by helping users reach a state of immersion, control and motivation, known as the state of flow. In various disciplines, being in a state of flow has been shown to improve performance and learning. Hence, we aimed to draw BCI users into a flow state to improve both their subjective experience and their performance. In a Motor Imagery BCI game, we manipulated flow in two ways: 1) by adapting the task difficulty and 2) by using background music. Results showed that the difficulty adaptation induced a higher flow state, whereas music had no effect. There was a positive correlation between subjective flow scores and offline performance, although the flow factors had either no effect (adaptation) or a negative effect (music) on online performance. Overall, favouring the flow state seems a promising approach for enhancing users' satisfaction, although its complexity requires more thorough investigation.
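
    The difficulty-adaptation idea can be sketched as a simple control rule. The abstract does not specify the adaptation algorithm, so the target rate, step size and update rule below are illustrative assumptions only.

```python
# Hypothetical difficulty-adaptation rule: nudge task difficulty so the
# user's success rate tracks a target believed to sustain flow.
# All constants (target, step) are illustrative, not from the paper.

def adapt_difficulty(difficulty, success_rate, target=0.7, step=0.05):
    """Raise difficulty when the user over-performs, lower it otherwise."""
    if success_rate > target:
        difficulty += step   # task too easy -> make it harder
    elif success_rate < target:
        difficulty -= step   # task too hard -> make it easier
    return min(1.0, max(0.0, difficulty))   # keep difficulty in [0, 1]

d = 0.5
for rate in [0.9, 0.9, 0.5, 0.8]:          # per-block success rates
    d = adapt_difficulty(d, rate)
print(round(d, 2))                          # difficulty settles near 0.6
```

    A rule of this shape keeps the task neither trivially easy nor frustratingly hard, which is the usual operationalization of the challenge-skill balance in flow theory.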

    Canonical Source Reconstruction for MEG

    We describe a simple and efficient solution to the problem of reconstructing electromagnetic sources into a canonical or standard anatomical space. Its simplicity rests upon incorporating subject-specific anatomy into the forward model in a way that eschews the need for cortical surface extraction. The forward model starts with a canonical cortical mesh, defined in a standard stereotactic space. The mesh is warped, in a nonlinear fashion, to match the subject's anatomy. This warping is the inverse of the transformation derived from spatial normalization of the subject's structural MRI, using fully automated procedures that have been established for other imaging modalities. Electromagnetic lead fields are computed using the warped mesh, in conjunction with a spherical head model (which does not rely on individual anatomy). The ensuing forward model is inverted using an empirical Bayesian scheme that we have described previously in several publications. Critically, because anatomical information enters the forward model, there is no need to spatially normalize the reconstructed source activity. In other words, each source comprising the mesh has a predetermined and unique anatomical attribution within standard stereotactic space. This enables the pooling of data from multiple subjects and the reporting of results in stereotactic coordinates. Furthermore, it allows the graceful fusion of fMRI and MEG data within the same anatomical framework.
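
    The core geometric step, mapping canonical mesh vertices into subject space with the inverse of the normalization transform, can be sketched on toy data. A plain affine stands in for the nonlinear deformation, and all coordinates below are made up.

```python
# Sketch of canonical-mesh warping: vertices defined in standard space
# are mapped into subject space by inverting the (here, affine)
# spatial-normalization transform. The real method is nonlinear.
import numpy as np

canonical_vertices = np.array([[0.0, 0.0, 0.0],
                               [10.0, 0.0, 0.0],
                               [0.0, 10.0, 0.0]])   # mm, standard space

# Hypothetical normalization affine (subject space -> standard space)
A = np.array([[1.1, 0.0, 0.0, 2.0],
              [0.0, 0.9, 0.0, -3.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0, 1.0]])

A_inv = np.linalg.inv(A)                            # standard -> subject
homog = np.c_[canonical_vertices, np.ones(3)]       # homogeneous coords
subject_vertices = (A_inv @ homog.T).T[:, :3]       # warped mesh vertices
print(subject_vertices[1])                          # vertex (10, 0, 0) in subject space
```

    Because every subject's mesh is a warp of the same canonical mesh, vertex k denotes the same anatomical location in all subjects, which is exactly what makes pooling across subjects possible.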

    Theoretical analysis of xDAWN algorithm: application to an efficient sensor selection in a P300 BCI

    A Brain-Computer Interface (BCI) is a specific type of human-machine interface that enables communication between a subject/patient and a computer through direct decoding of brain activity. To improve ergonomics and minimize the cost of such a BCI, reducing the number of electrodes is mandatory. A theoretical analysis of the underlying model induced by the BCI paradigm yields a closed-form expression for the spatial filters that maximize the signal to signal-plus-noise ratio. Moreover, this new formulation improves a previously introduced method for automatically selecting relevant sensors. Experimental results on 20 subjects show that the proposed method efficiently selects the most relevant sensors: going from 32 down to 8 sensors, the loss in classification accuracy is less than 2%. Furthermore, the computational time required to rank the 32 sensors is reduced by a speed-up factor of 4.6, allowing dynamic monitoring of sensor relevance as a marker of the user's mental state.
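
    The spatial filters described above come out of a generalized eigenvalue problem. The sketch below shows that core step on synthetic data; it is a loose illustration of the signal to signal-plus-noise ratio (SSNR) criterion, not the paper's full closed-form derivation.

```python
# Hedged sketch of the xDAWN principle: spatial filters are generalized
# eigenvectors maximizing the signal to signal-plus-noise ratio (SSNR).
# All data here are synthetic toy signals.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_channels, n_times, n_epochs = 8, 50, 100

pattern = rng.standard_normal(n_channels)            # fixed spatial mixture
evoked = np.sin(np.linspace(0, np.pi, n_times))      # toy P300-like waveform
epochs = rng.standard_normal((n_epochs, n_channels, n_times))
epochs += 2.0 * pattern[:, None] * evoked[None, :]   # add evoked response

a_hat = epochs.mean(axis=0)                          # evoked-response estimate
sigma_s = a_hat @ a_hat.T                            # signal covariance
x_all = epochs.transpose(1, 0, 2).reshape(n_channels, -1)
sigma_x = x_all @ x_all.T / x_all.shape[1]           # data covariance

vals, vecs = eigh(sigma_s, sigma_x)                  # GEVD, ascending order
best_filter = vecs[:, -1]                            # filter with highest SSNR
print(vals[-1] > vals[0])                            # True: SSNR is maximized
```

    Ranking sensors by their contribution to the top filters is one natural route to the automatic selection procedure mentioned in the abstract.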

    Towards an actor-based model of the neurofeedback/BCI closed-loop

    Neurofeedback training describes a closed-loop paradigm in which a Brain-Computer Interface is typically used to provide a subject with an evaluation of his/her own mental states. As a learning process, it aims at enabling the subject to apprehend his or her own latent cognitive states in order to modulate them. Its use for therapeutic purposes has gained a lot of traction in the public sphere in the last decade, but conflicting evidence concerning its efficacy has led to increasing efforts by the scientific community to provide better explanations for the cognitive mechanisms at work. We intend to contribute to this effort by proposing a mathematical formalization of the mechanisms at play in this (arguably) quite complex dynamical system. Due to the subjective nature of the task, representing the subject's and experimenter's separate beliefs and hypotheses is an important first step towards a meaningful approximation. We provide a first model of the training loop based on these considerations, introducing two pipelines. The direct pipeline (subject -> feedback) uses a coupling between cognitive and physiological states to infer latent cognitive states from measurements. The return pipeline (feedback -> subject) describes how perception of the indicator impacts subject behaviour. To describe the behaviour of an agent facing an uncertain environment, we use the Active Inference framework [1], a Bayesian approach to belief updating that provides a biologically plausible model of perception, action and learning. The ensuing model is then leveraged to computationally simulate the behaviour and evolving beliefs of a neurofeedback training subject in tasks of varying nature and difficulty. We finally analyze the effects of several sources of error, such as measurement noise or uncertainty surrounding the choice of the biomarker, and conclude on their influence on training efficacy.
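
    The inference step of the direct pipeline can be caricatured with a one-dimensional Gaussian belief update: the subject revises an estimate of a latent cognitive state after each noisy feedback sample. This is a deliberately minimal stand-in, not the paper's Active Inference model.

```python
# Toy belief update for the neurofeedback loop: a Gaussian belief over a
# latent cognitive state is corrected by precision-weighted prediction
# errors from noisy feedback. Minimal sketch, not the full model.
def update_belief(mu, var, feedback, noise_var):
    """One conjugate Gaussian update from a noisy feedback sample."""
    k = var / (var + noise_var)            # precision-weighted gain
    mu_post = mu + k * (feedback - mu)     # prediction-error correction
    var_post = (1 - k) * var               # belief sharpens with evidence
    return mu_post, var_post

mu, var = 0.0, 1.0                          # vague prior belief
for fb in [0.8, 0.9, 0.7]:                  # successive feedback readings
    mu, var = update_belief(mu, var, fb, noise_var=0.5)
print(round(mu, 3), round(var, 3))          # belief has moved toward ~0.8
```

    Raising `noise_var` mimics a noisy measurement or a poorly chosen biomarker: the gain k shrinks and the subject's belief, hence learning, moves more slowly, which is one of the error sources the abstract analyzes.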

    Modeling subject perception and behaviour during neurofeedback training

    Neurofeedback training (NFT) describes a closed-loop paradigm in which a subject is provided with a real-time evaluation of his/her brain activity. As a learning process, it is designed to help the subject learn to apprehend his/her own cognitive states and better modulate them through mental actions. Its use for therapeutic purposes has gained a lot of traction in the public sphere in the last decade, but conflicting evidence concerning its efficacy has led to a two-pronged effort from the scientific community. First, a call for standardization of experimental protocols and reports [1], aiming to reduce the variability of results and provide a reliable set of data describing empirical findings. Second, an effort towards a formal description of the neurofeedback loop and the main hypotheses that guide the design of our experiments, in order to explain or even predict the effects of such training [2,3].

    A generic framework for adaptive EEG-based BCI training and operation

    There are numerous possibilities and motivations for an adaptive BCI, which may not be easy to clarify and organize for a newcomer to the field. To our knowledge, the literature on adaptive BCI has not previously been classified in a comprehensive and structured way. We propose a conceptual framework, a taxonomy of adaptive BCI methods, which encompasses the most important approaches and organizes them so that a reader can clearly visualize which elements are being adapted and for what reason. In the interest of providing a clear review of existing adaptive BCIs, this framework considers adaptation approaches for both the user and the machine, i.e., drawing on instructional design as well as the usual machine learning techniques. This framework not only provides a coherent review of this extensive literature but also enables the reader to perceive gaps and flaws in current BCI systems, which will hopefully inspire novel solutions for an overall improvement.

    Dynamics of oddball sound processing: Trial-by-trial modeling of ECoG signals

    Recent computational models of perception conceptualize auditory oddball responses as signatures of a (Bayesian) learning process, in line with the influential view of the mismatch negativity (MMN) as a prediction error signal. Novel MMN experimental paradigms have put an emphasis on the neurophysiological effects of manipulating regularity and predictability in sound sequences. This raises the question of the contextual adaptation of the learning process itself, which on the computational side speaks to the mechanisms of gain-modulated (or precision-weighted) prediction error. In this study using electrocorticographic (ECoG) signals, we manipulated the predictability of oddball sound sequences with two objectives: (i) to uncover the computational process underlying trial-by-trial variations of the cortical responses, since the fluctuations between trials, generally ignored by approaches based on averaged evoked responses, should reflect the learning involved; here we used a general linear model (GLM) and Bayesian Model Reduction (BMR) to assess the respective contributions of experimental manipulations and learning mechanisms under probabilistic assumptions; and (ii) to validate and expand on previous findings, obtained with simultaneous EEG-MEG recordings, regarding the effect of changes in predictability. Our trial-by-trial analysis revealed only a few stimulus-responsive sensors, but the measured effects appear to be consistent over subjects in both time and space. In time, they occur at the typical latency of the MMN (between 100 and 250 ms post-stimulus). In space, we found a dissociation between time-independent effects at more anterior temporal locations and time-dependent (learning) effects at more posterior locations. However, we could not observe any clear and reliable effect of our predictability manipulation on the above learning process.
    Overall, these findings clearly demonstrate the potential of trial-by-trial modeling to unravel perceptual learning processes and their neurophysiological counterparts.
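
    The trial-by-trial logic can be sketched end to end on synthetic data: a simple Bayesian learner turns the sound sequence into a surprise regressor, and a GLM recovers how strongly single-trial amplitudes track it. The learner, effect size and noise level below are illustrative choices, not the study's actual model space.

```python
# Sketch of trial-by-trial modeling: Bayesian surprise over an oddball
# sequence becomes a GLM regressor for single-trial response amplitudes.
# Learner, effect size and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
sequence = rng.random(200) < 0.2                 # True = deviant tone

# Beta-Bernoulli learner: surprise = -log p(observed outcome)
a, b = 1.0, 1.0                                  # flat prior on p(deviant)
surprise = []
for deviant in sequence:
    p_dev = a / (a + b)
    surprise.append(-np.log(p_dev if deviant else 1.0 - p_dev))
    a, b = a + deviant, b + (not deviant)        # posterior update
surprise = np.asarray(surprise)

# Toy single-trial amplitudes driven by surprise plus noise
responses = 0.5 * surprise + 0.1 * rng.standard_normal(200)

X = np.c_[np.ones_like(surprise), surprise]      # [intercept, surprise]
beta, *_ = np.linalg.lstsq(X, responses, rcond=None)
print(round(beta[1], 2))                         # recovers a value near 0.5
```

    In the study itself the competing regressors (stimulus type, time-independent effects, different learner variants) are compared with Bayesian Model Reduction rather than raw least squares, but the regressor-construction step is the same in spirit.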

    Simple Probabilistic Data-driven Model for Adaptive BCI Feedback

    Due to abundant signal and user variability, among other factors, BCIs remain difficult to control. To increase performance, adaptive methods are necessary to deal with such a vast spectrum of variable data. Typically, adaptive methods address signal or classification corrections (adaptive spatial filters [1], co-adaptive calibration [2], adaptive classifiers [3]). As such, they do not necessarily account for the implicit alterations they perform on the feedback (in real time), and in turn on the user, creating yet another potential source of unpredictable variability. Namely, certain user personality traits and states have been shown to correlate with BCI performance, while feedback can impact user states [4]. For instance, altered (biased) feedback distorted participants' perception of their performance, influencing their feeling of control and their online performance [5]. Thus, one can assume that through feedback we might implicitly guide the user towards a desired state beneficial for BCI performance. We propose a novel, simple, probabilistic data-driven dynamic model to provide such feedback so as to maximize performance.
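
    The abstract does not give the model's equations, so as a loose illustration, here is one way feedback could be biased as a function of recent performance; the blending rule and constants are invented for this sketch.

```python
# Hypothetical biased-feedback rule: the displayed value is a convex blend
# of the decoded class probability and a positive bias, with more bias
# when the user's recent accuracy is low. Purely illustrative.
def biased_feedback(decoded_p, recent_accuracy, max_bias=0.2):
    """Shift feedback upward when the user struggles, to sustain motivation."""
    bias = max_bias * (1.0 - recent_accuracy)   # more help at low accuracy
    shown = (1.0 - bias) * decoded_p + bias * 1.0
    return min(1.0, shown)

print(round(biased_feedback(0.4, recent_accuracy=0.5), 3))  # struggling: boosted
print(round(biased_feedback(0.4, recent_accuracy=1.0), 3))  # accurate: unchanged
```

    The point of a data-driven version is to learn, rather than hand-tune, how much bias steers the user toward states that correlate with better performance.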

    The impact of MEG source reconstruction method on source-space connectivity estimation: A comparison between minimum-norm solution and beamforming.

    Despite numerous important contributions, the investigation of brain connectivity with magnetoencephalography (MEG) still faces multiple challenges. One critical aspect of source-level connectivity, largely overlooked in the literature, is the putative effect of the choice of the inverse method on the subsequent cortico-cortical coupling analysis. We set out to investigate the impact of three inverse methods on source coherence detection using simulated MEG data. To this end, thousands of randomly located pairs of sources were created. Several parameters were manipulated, including inter- and intra-source correlation strength, source size and spatial configuration. The simulated pairs of sources were then used to generate sensor-level MEG measurements at varying signal-to-noise ratios (SNR). Next, the source level power and coherence maps were calculated using three methods (a) L2-Minimum-Norm Estimate (MNE), (b) Linearly Constrained Minimum Variance (LCMV) beamforming, and (c) Dynamic Imaging of Coherent Sources (DICS) beamforming. The performances of the methods were evaluated using Receiver Operating Characteristic (ROC) curves. The results indicate that beamformers perform better than MNE for coherence reconstructions if the interacting cortical sources consist of point-like sources. On the other hand, MNE provides better connectivity estimation than beamformers, if the interacting sources are simulated as extended cortical patches, where each patch consists of dipoles with identical time series (high intra-patch coherence). However, the performance of the beamformers for interacting patches improves substantially if each patch of active cortex is simulated with only partly coherent time series (partial intra-patch coherence). 
    These results demonstrate that the choice of the inverse method impacts the results of MEG source-space coherence analysis, and that the optimal choice of the inverse solution depends on the spatial and synchronization profiles of the interacting cortical sources. The insights revealed here can guide method selection and help improve data interpretation regarding MEG connectivity estimation.
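
    The ROC evaluation can be sketched with synthetic coherence scores standing in for the outputs of MNE, LCMV or DICS: score distributions for truly interacting and non-interacting pairs are summarized by the area under the ROC curve (AUC).

```python
# Sketch of the ROC-based evaluation: coherence scores for interacting
# vs. independent source pairs, summarized by the AUC computed through
# the rank-sum (Mann-Whitney) identity. Scores are synthetic.
import numpy as np

rng = np.random.default_rng(2)
coh_interacting = rng.beta(5, 2, 500)    # simulated coherent pairs
coh_null = rng.beta(2, 5, 500)           # simulated independent pairs

scores = np.r_[coh_interacting, coh_null]
labels = np.r_[np.ones(500), np.zeros(500)]

ranks = scores.argsort().argsort() + 1.0          # ranks 1..N (no ties here)
r_pos = ranks[labels == 1].sum()                  # rank sum of true pairs
auc = (r_pos - 500 * 501 / 2) / (500 * 500)       # Mann-Whitney identity
print(auc > 0.8)                                  # True: easy separation
```

    Repeating this for each inverse method, SNR level and source configuration gives exactly the kind of method-by-condition comparison reported in the study.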